Identifying Parkinson's Patients: A Functional Gradient Boosting Approach
Authors
Abstract
Parkinson's disease, a progressive neurological disorder, is difficult to identify because its associated symptoms are often hidden. We present a machine learning approach that takes a well-defined set of features obtained from the Parkinson's Progression Markers Initiative (PPMI) study as input and classifies each subject into one of two classes: PD (Parkinson's disease) or HC (Healthy Control). To the best of our knowledge, this is the first work to apply machine learning algorithms to classifying patients with Parkinson's disease with the involvement of a domain expert during the feature selection process. We evaluate our approach on 1194 patients acquired from the Parkinson's Progression Markers Initiative and show that it achieves state-of-the-art performance with minimal feature engineering.
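For a concrete picture of the setup, the sketch below trains a gradient-boosted binary classifier on a PD/HC feature table using scikit-learn. The file name, column names, split, and hyperparameters are illustrative assumptions only, not the authors' actual PPMI pipeline or expert-selected feature set.

```python
# Minimal sketch (not the authors' pipeline): gradient boosting on a
# PPMI-style feature table. All names below are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical table: one row per subject, a "label" column with "PD"/"HC",
# and numeric features chosen with input from a domain expert.
df = pd.read_csv("ppmi_features.csv")          # assumed file
X = df.drop(columns=["label"])                 # assumed feature columns
y = (df["label"] == "PD").astype(int)          # 1 = Parkinson's, 0 = control

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice the paper's expert-curated feature set and its own evaluation protocol would replace this placeholder table and split.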
Similar Resources
Unbiased Conjugate Direction Boosting for Conditional Random Fields
Conditional Random Fields (CRFs) currently receive a lot of attention for labeling sequences. To train CRFs, Dietterich et al. proposed a functional gradient optimization approach: the potential functions are represented as weighted sums of regression trees that are induced using Friedman’s gradient tree boosting method. In this paper, we improve upon this approach in two ways. First, we identi...
Imitation Learning in Relational Domains: A Functional-Gradient Boosting Approach
Imitation learning refers to the problem of learning how to behave by observing a teacher in action. We consider imitation learning in relational domains, in which there is a varying number of objects and relations among them. In prior work, simple relational policies are learned by viewing imitation learning as supervised learning of a function from states to actions. For propositional worlds,...
Boosting Methods: Why they can be useful for High-Dimensional Data
We present an extended abstract about boosting. We describe first in section 1 (in a self-contained way) a generic functional gradient descent algorithm, which yields a general representation of boosting. Properties of boosting or functional gradient descent are then very briefly summarized in section 2.
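As a rough illustration of that generic view, the sketch below implements functional gradient descent for boosting under squared loss, where each stage fits a regression tree to the negative gradient (which for squared loss is simply the residual). The function names, step size, and tree depth are assumptions for illustration, not the algorithm as presented in that abstract.

```python
# Sketch of generic functional gradient descent for boosting, squared loss:
# each stage fits a weak learner to the negative gradient (the residuals)
# and adds it with a small step size. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def functional_gradient_boost(X, y, n_stages=100, step=0.1, max_depth=2):
    # Start from the constant model F_0(x) = mean(y).
    f0 = float(y.mean())
    pred = np.full(len(y), f0)
    learners = []
    for _ in range(n_stages):
        # Negative gradient of (1/2)(y - F(x))^2 w.r.t. F(x) is the residual.
        residual = y - pred
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)
        pred += step * tree.predict(X)
        learners.append(tree)
    return f0, learners

def boosted_predict(f0, learners, X, step=0.1):
    # Sum the base model and the scaled stage-wise corrections.
    return f0 + step * sum(t.predict(X) for t in learners)
```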
Functional gradient ascent for Probit regression
This paper proposes two gradient-based methods to fit a Probit regression model by maximizing the sample log-likelihood function. Using the property of the Hessian of the objective function, the first method performs weighted least squares regression in each iteration of the Newton–Raphson framework, resulting in ProbitBoost, a boosting-like algorithm. Motivated by the gradient boosting algorith...
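To make the objective being maximized concrete, here is a minimal sketch of plain gradient ascent on the probit log-likelihood. This is not that paper's ProbitBoost or Newton–Raphson variants; the learning rate and iteration count are arbitrary assumptions.

```python
# Sketch: gradient ascent on the probit log-likelihood
#   l(beta) = sum_i [ y_i * log Phi(x_i'beta) + (1 - y_i) * log(1 - Phi(x_i'beta)) ]
# whose gradient is sum_i phi(eta_i) * (y_i - Phi(eta_i)) / (Phi * (1 - Phi)) * x_i.
import numpy as np
from scipy.stats import norm

def probit_gradient_ascent(X, y, lr=0.01, n_iter=500):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)   # Phi(x_i'beta)
        w = norm.pdf(eta) * (y - p) / (p * (1 - p))    # per-observation score weight
        beta += lr * (X.T @ w)                         # ascend the log-likelihood
    return beta
```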
Optimization by gradient boosting
Gradient boosting is a state-of-the-art prediction technique that sequentially produces a model in the form of linear combinations of simple predictors—typically decision trees—by solving an infinite-dimensional convex optimization problem. We provide in the present paper a thorough analysis of two widespread versions of gradient boosting, and introduce a general framework for studying these al...
Journal: Artificial Intelligence in Medicine: 16th Conference on Artificial Intelligence in Medicine, AIME 2017, Vienna, Austria, June 21-24, 2017, Proceedings. Conference on Artificial Intelligence in Medicine (2005-) (16th : 2017 : Vienna, Au...
Volume: 10259
Pages: -
Publication date: 2017